11 - Lecture 10, 19 Dec 2022 [ID:46215]

More examples, but the key here is to spot patterns: to see what the approximate bound of an algorithm might be, and then to make this bound as tight as possible in order to place it in one of these main categories. That's really what it boils down to. Remember a few examples for each of these categories and you should be fine, because most of the stuff you'll do will fall into one of them.

But let's quickly recap. So why do we want to do this? We want to reason about an algorithm to be able to predict the amount of time it will take to execute in the worst case, when we have to go through all of the elements. Does it still make sense to implement it that way when the input gets very large? And does it make a difference if I implement it this way or that way? That might be implementation-specific, but in general some algorithms are simply equivalent in terms of worst-case runtime.

In program efficiency there is a best case, an average case, and a worst case. We only look at the worst case. The average case is what you'll observe most of the time; the best case is very unlikely to be achieved, and your algorithm is not required to ever reach it.
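To make that concrete, here is a sketch of my own (not code from the lecture), assuming a plain linear search, where all three cases show up in the same function:

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if absent."""
    for i, value in enumerate(items):
        if value == target:  # best case: hit on the very first element, O(1)
            return i
    return -1                # worst case: scanned all N elements, O(N)
```

If the target happens to sit at index 0, one comparison suffices (best case); if it is absent, all N elements are compared (worst case); on average, roughly N/2 comparisons. The worst case is what we categorize: O(N).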

So yeah, this is just a measure of the order of growth. It's not really an exact way to calculate the runtime, even though two of the examples I'll show you look very exact because we use a few mathematical tricks. In general it's only about the order of growth and the largest factors that define the runtime. We are not interested in minor things like memory accesses or assignments.

And again, this is the kind of list you should have somewhere in your mind; these are the most prevalent categories of algorithms. At the bottom sit constant-time operations: an assignment, an addition, a memory lookup, a reference to something. We also call this boilerplate code: anything that's required to make your program run but isn't its core essence. There's actually nothing really exciting in there, no algorithms worth noting, just this kind of boilerplate.

The interesting parts start here. Most of your algorithms will probably be in one of these categories. O(N): if you have to visit every element of a list, for example, or if you have to count up to a number in increments of one, then you'll find O(N). O(log N): anything that's tree-like, with or without recursion, where you only need to go down one branch.
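The two O(N) patterns just mentioned, visiting every element of a list and counting up to a number in increments of one, can be sketched as follows (my own illustration, not from the lecture):

```python
def total(items):
    """Sum a list by visiting every element exactly once: O(N)."""
    s = 0
    for x in items:  # N iterations, one per element
        s += x
    return s

def count_up(n):
    """Count from 0 to n in increments of one: n loop iterations, O(n)."""
    i = 0
    while i < n:
        i += 1
    return i
```

In both cases the number of steps grows linearly with the input, so doubling the input doubles the work.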

So the depth of the tree is order log N. That of course depends on what your tree looks like. Is it a balanced tree or an unbalanced one? Is it a binary tree? But for the order it doesn't really matter; it's just about the worst case: what's the longest path? Well, the depth of the tree is as deep as the longest path, so that's about log N elements.
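Going down one branch at a time is the same idea as repeatedly halving a sorted range; binary search is the standard example of this, sketched here as my own illustration (the lecture doesn't show this particular code):

```python
def binary_search(sorted_items, target):
    """Search a sorted list by halving the range each step.

    Each iteration discards half of the remaining range, so the number
    of steps is about log2(N): O(log N), like the depth of a balanced tree.
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1  # discard the left half, like taking one branch
        else:
            hi = mid - 1  # discard the right half
    return -1
```

For a list of a million elements this needs at most about 20 comparisons, which is why the "one branch down a balanced tree" pattern is so valuable.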

Log-linear algorithms are interesting; I hope I get to show you one in this context. It's just a combination of these two.
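The lecture doesn't name a specific log-linear algorithm at this point, but merge sort is a standard example of exactly this combination: about log N levels of halving, with O(N) merging work per level, giving O(N log N). A minimal sketch of my own:

```python
def merge_sort(items):
    """Sort a list in O(N log N): log N levels of splitting, O(N) merge per level."""
    if len(items) <= 1:          # base case: a single element is already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # recursively sort each half
    right = merge_sort(items[mid:])
    # Merge the two sorted halves in one linear pass: O(N) per level.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```

The tree of recursive splits has depth about log N (the log-N part), and every level touches all N elements once while merging (the linear part).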
